4 - Deep Learning [ID:9679]

Okay, so welcome everybody to our deep learning lecture.

And today we want to continue our journey through the realm of deep learning and we

want to look a little into activation functions and convolutional neural networks.

So this is now where we start really going into the direction of deep learning.

So these are essentially the final ingredients that we still need to really start building

deep networks.

So first we briefly discuss the activation functions, which are different from the losses, because these are the non-linearities within the network.
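As a quick illustration of what such non-linearities look like in practice, here is a minimal NumPy sketch of a few common activation functions; the selection (sigmoid, tanh, ReLU) and the test values are just illustrative and not tied to a specific slide.

```python
import numpy as np

def sigmoid(x):
    # Squashes any real input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Squashes any real input into the range (-1, 1).
    return np.tanh(x)

def relu(x):
    # Passes positive inputs through unchanged, sets negative ones to zero.
    return np.maximum(0.0, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(sigmoid(x))
print(tanh(x))
print(relu(x))
```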

And then we will discuss convolutional neural networks and in particular the convolutional

layers and pooling layers that allow us to bring a certain degree of abstraction into

such a network.
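To make the two building blocks concrete, here is a minimal NumPy sketch of a single-channel 2D convolution ("valid" mode, stride 1) followed by 2x2 max pooling; it is only meant to illustrate the operations, not to be an efficient or complete layer implementation, and the input and kernel sizes are arbitrary assumptions.

```python
import numpy as np

def conv2d_valid(image, kernel):
    # Slide the kernel over the image (stride 1, no padding) and
    # compute the weighted sum at every position.
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool_2x2(x):
    # Keep only the maximum of every non-overlapping 2x2 block (downsampling).
    h, w = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
    blocks = x[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

image = np.random.rand(6, 6)    # toy single-channel input
kernel = np.random.rand(3, 3)   # one filter (would be learned in a real CNN)
features = max_pool_2x2(conv2d_valid(image, kernel))
print(features.shape)           # (2, 2): a coarser, more abstract representation
```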

Okay, so let's start with the activations. As we always like to do when talking about neural networks, we like to talk about biology, because this is all motivated in some biological sense: we have those neurons, there is some activation potential within one of those cells, then the activation is somehow triggered, and it is then transported to other neurons.

In particular, this happens when the activation level within one of those cells exceeds a certain threshold, and you can see here that this is a process that unfolds over time: when the input activations exceed a certain threshold, there is a depolarization that triggers the action potential, which essentially travels through the neuron, and then we have a repolarization and a short refractory period such that the cell can return to its resting state.

So in a biological neural cell you have these effects over time, and they are triggered in an all-or-nothing fashion: if you have a strong activation, it doesn't matter how strong it is, there is always the same action potential coming out. If your activation is not strong enough, no potential is triggered, but if the threshold is exceeded, an action potential comes out of the cell.
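This all-or-nothing firing is essentially what the classic hard-threshold (step) neuron models capture; the following minimal Python sketch is only an illustration, and the threshold value is an arbitrary assumption.

```python
def fires(summed_input, threshold=1.0):
    # All-or-nothing: the output is identical whenever the threshold is exceeded,
    # no matter by how much; below the threshold, nothing comes out.
    return 1.0 if summed_input > threshold else 0.0

print(fires(0.4))  # 0.0 -> below threshold, no action potential
print(fires(1.2))  # 1.0 -> above threshold, action potential
print(fires(5.0))  # 1.0 -> much stronger input, still the same output
```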

This is then guided essentially through a connection and finally through a synapse to the next cell, and these connections are actually insulated: there are these Schwann cells, and these cells form the myelin sheath. The myelin sheath provides insulation to the connections such that the action potentials can travel faster and without losing too much energy.

This is quite an interesting process and, by the way, if you suffer from particular diseases like multiple sclerosis, then this myelin insulation is degraded, which then causes the specific disease; so there really are neurological diseases associated with certain malfunctions at the cellular level of the brain.

This is about all I want to say about the biological analogy, because we want to stay in the domain of algorithms. But we can at least see in biology that the knowledge lies in the connections; there are inhibitory and excitatory connections, and these synapses are intrinsically constructed in a feed-forward manner, just as we construct our networks with this layer-by-layer principle. To be honest, in the biological brain these connections can go in any direction, so we don't have this layered structure there, but at least we have the feed-forward structure.

Then the sum of the activations is crucial: if you have a significant or sufficient amount of activation, the neuron will fire. These activations are spikes, and they always come with a specified intensity, so there is no variation of intensity; but if you have significant activation, there may be multiple spikes one after another, so the information is also encoded in the time domain. This is something we currently do not model; we just have feed-forward processing, and therefore we need a different way of encoding the information, and we do that, for example, with different activation functions.
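In the artificial counterpart, this spike-based coding over time is replaced by a simpler abstraction: a neuron computes a weighted sum of its inputs and passes it through an activation function. The following NumPy sketch shows that abstraction; the concrete weights, bias, and the choice of ReLU are illustrative assumptions.

```python
import numpy as np

def neuron(x, w, b, activation=lambda z: np.maximum(0.0, z)):
    # Weighted sum of the inputs plus a bias, passed through a non-linearity.
    return activation(np.dot(w, x) + b)

x = np.array([0.5, -1.0, 2.0])   # activations coming from the previous layer
w = np.array([0.8, 0.3, -0.5])   # connection weights (excitatory or inhibitory)
b = 0.1                          # bias, playing the role of the firing threshold
print(neuron(x, w, b))           # a real-valued activation, not a spike train
```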

Part of a video series:
Accessible via: Open access
Duration: 01:17:15 min
Recording date: 2018-11-06
Uploaded on: 2018-11-08 15:06:58
Language: en-US

Deep Learning (DL) has attracted much interest in a wide range of applications such as image recognition, speech recognition and artificial intelligence, both from academia and industry. This lecture introduces the core elements of neural networks and deep learning; it comprises:

  • (multilayer) perceptron, backpropagation, fully connected neural networks

  • loss functions and optimization strategies

  • convolutional neural networks (CNNs)

  • activation functions

  • regularization strategies

  • common practices for training and evaluating neural networks

  • visualization of networks and results

  • common architectures, such as LeNet, AlexNet, VGG, GoogLeNet

  • recurrent neural networks (RNN, TBPTT, LSTM, GRU)

  • deep reinforcement learning

  • unsupervised learning (autoencoder, RBM, DBM, VAE)

  • generative adversarial networks (GANs)

  • weakly supervised learning

  • applications of deep learning (segmentation, object detection, speech recognition, ...)

Tags

networks learning neural convolution representation linear layers function